
    Components and Functions of Crowdsourcing Systems – A Systematic Literature Review

    Many organizations are now starting to introduce crowdsourcing as a new business model to outsource tasks, which are traditionally performed by a small group of people, to an undefined, large workforce. While the utilization of crowdsourcing offers many advantages, the development of the required system carries some risks, which can be reduced by establishing a sound theoretical foundation. This article therefore strives to gain a better understanding of what crowdsourcing systems are and which design aspects are typically considered in the development of such systems. To this end, the author conducted a systematic literature review in the domain of crowdsourcing systems. As a result, 17 definitions of crowdsourcing systems were found and categorized into four perspectives: the organizational, the technical, the functional, and the human-centric. In the second part of the results, the author derives and presents the components and functions that are implemented in a crowdsourcing system.

    Enhancing Automation and Interoperability in Enterprise Crowdsourcing Environments

    The last couple of years have seen a fascinating evolution. While the early Web predominantly focused on human consumption of Web content, the widespread dissemination of social software and Web 2.0 technologies has enabled new forms of collaborative content creation and problem solving. These new forms often draw on the principles of collective intelligence, a phenomenon that emerges when a group of people cooperate or compete with each other to create a result that is better or more intelligent than any individual result (Leimeister, 2010; Malone, Laubacher, & Dellarocas, 2010). Crowdsourcing has recently gained attention as one of the mechanisms that tap into the power of web-enabled collective intelligence (Howe, 2008). Brabham (2013) defines it as “an online, distributed problem-solving and production model that leverages the collective intelligence of online communities to serve specific organizational goals” (p. xix). Well-known examples of crowdsourcing platforms are Wikipedia, Amazon Mechanical Turk, and InnoCentive.

    Since the emergence of the term crowdsourcing in 2006, one popular misconception has been that crowdsourcing relies largely on an amateur crowd rather than a pool of professionally skilled workers (Brabham, 2013). While this may hold for tasks of low cognitive demand, such as tagging a picture or rating a product, it is often not true for complex problem-solving and creative tasks, such as developing a new computer algorithm or creating an impressive product design. This raises the question of how to efficiently allocate an enterprise crowdsourcing task to appropriate members of the crowd. The sheer number of tasks available at crowdsourcing intermediaries makes it especially challenging for workers to identify a task that matches their skills, experiences, and knowledge (Schall, 2012, p. 2).

    An explanation of why the identification of appropriate expert knowledge plays a major role in crowdsourcing is partly given by Condorcet’s jury theorem (Sunstein, 2008, p. 25). The theorem states that if the average participant in a binary decision process is more likely to be correct than incorrect, then the probability that the aggregate arrives at the right answer increases with the number of participants (a minimal numerical sketch of this effect follows the abstract). Assuming that a suitable participant for a task is more likely to give a correct answer or solution than an unsuitable one, efficient task recommendation becomes crucial for improving the aggregated results of crowdsourcing processes. Although some assumptions of the theorem, such as independent votes, binary decisions, and homogeneous groups, are often unrealistic in practice, it illustrates the importance of optimized task allocation and group formation that consider the task requirements and the workers’ characteristics.

    Ontologies are widely applied to support semantic search and recommendation mechanisms (Middleton, De Roure, & Shadbolt, 2009). However, little research has investigated the potential and the design of an ontology for the domain of enterprise crowdsourcing. The author of this thesis argues in favor of enhancing the automation and interoperability of an enterprise crowdsourcing environment by introducing a semantic vocabulary in the form of an expressive but easy-to-use ontology. Deploying such a vocabulary is likely to provide several technical and economic benefits for an enterprise, and these benefits were the main drivers of the research project underlying this thesis:

    1. Task allocation: By utilizing the semantics, requesters are able to form smaller, task-specific crowds that perform tasks at lower cost and in less time than larger crowds. A standardized and controlled vocabulary allows requesters to communicate specific details about a crowdsourcing activity within a web page alongside other displayed information. This benefits both sides: contributors can easily and precisely search for tasks that correspond to their interests, experiences, skills, knowledge, and availability, while crowdsourcing systems and intermediaries can proactively recommend tasks to potential contributors (e.g., based on their social network profiles); a simple matching sketch is given after this abstract.

    2. Quality control: Capturing and storing crowdsourcing data increases the transparency of the entire crowdsourcing activity and thus allows for more sophisticated quality control. Requesters are able to check the consistency of crowdsourcing data and receive appropriate support to verify and validate it against defined data types and value ranges. Before involving potential workers in a crowdsourcing task, requesters can also judge their trustworthiness based on previously accomplished tasks and hence improve the recruitment process.

    3. Task definition: A standardized set of semantic entities supports the configuration of a crowdsourcing task. Requesters can evaluate historical crowdsourcing data to obtain suggestions for identical or similar tasks, for example, regarding which incentive or evaluation mechanism to use. They may also reduce the time needed to configure a crowdsourcing task by reusing well-established task specifications of a particular type.

    4. Data integration and exchange: Applying a semantic vocabulary as a standard format for describing enterprise crowdsourcing activities allows not only crowdsourcing systems inside the company but also crowdsourcing intermediaries outside it to extract crowdsourcing data from other business applications, such as project management, enterprise resource planning, or social software, and to use it for further processing without retyping and copying the data. Additionally, enterprise or web search engines may exploit the structured data to provide enhanced search, browsing, and navigation capabilities, for example, by clustering similar crowdsourcing tasks according to the required qualifications or the offered incentives.

    The cumulative thesis comprises the following publications:
    Summary: Hetmank, L. (2014). Enhancing Automation and Interoperability in Enterprise Crowdsourcing Environments (Summary).
    Article 1: Hetmank, L. (2013). Components and Functions of Crowdsourcing Systems – A Systematic Literature Review. In 11th International Conference on Wirtschaftsinformatik (WI), Leipzig.
    Article 2: Hetmank, L. (2014). A Synopsis of Enterprise Crowdsourcing Literature. In 22nd European Conference on Information Systems (ECIS), Tel Aviv.
    Article 3: Hetmank, L. (2013). Towards a Semantic Standard for Enterprise Crowdsourcing – A Scenario-based Evaluation of a Conceptual Prototype. In 21st European Conference on Information Systems (ECIS), Utrecht.
    Article 4: Hetmank, L. (2014). Developing an Ontology for Enterprise Crowdsourcing. In Multikonferenz Wirtschaftsinformatik (MKWI), Paderborn.
    Article 5: Hetmank, L. (2014). An Ontology for Enhancing Automation and Interoperability in Enterprise Crowdsourcing Environments (Technical Report). Retrieved from http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-155187
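
    The jury theorem invoked in the abstract above is easy to check numerically. The following short Python sketch (illustrative only, not part of the thesis; the function name and parameter choices are ours) computes the probability that a simple majority of n independent participants, each correct with probability p, reaches the right binary decision:

    from math import comb

    def majority_correct(n: int, p: float) -> float:
        """Probability that a majority of n independent voters, each
        correct with probability p, picks the right binary answer
        (n is assumed odd, so ties cannot occur)."""
        return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
                   for k in range(n // 2 + 1, n + 1))

    # With p > 0.5, reliability rises with crowd size, as the theorem predicts:
    for n in (1, 11, 101):
        print(n, round(majority_correct(n, 0.6), 4))  # 0.6, ~0.75, ~0.98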
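
    The task-allocation benefit (item 1 above) can likewise be illustrated compactly. The sketch below is a deliberately naive stand-in for the semantic recommendation mechanisms the thesis pursues (the names and the Jaccard-based scoring are our assumptions, not the author's design); it ranks tasks by the overlap between a worker's skills and each task's required skills:

    def jaccard(a: set, b: set) -> float:
        """Jaccard similarity between two skill sets."""
        union = a | b
        return len(a & b) / len(union) if union else 0.0

    def recommend(worker_skills: set, tasks: dict, top_n: int = 2) -> list:
        """Rank tasks by skill overlap and return the best matches."""
        return sorted(tasks.items(),
                      key=lambda item: jaccard(worker_skills, item[1]),
                      reverse=True)[:top_n]

    tasks = {
        "design-logo":    {"graphic-design", "branding"},
        "tune-algorithm": {"python", "machine-learning", "statistics"},
        "tag-images":     {"annotation"},
    }
    # "tune-algorithm" ranks first for a worker with data-science skills:
    print(recommend({"python", "statistics", "machine-learning"}, tasks))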

    Social Knowledge Environments

    Knowledge management represents a key issue for both information systems academics and practitioners, including those who have become disillusioned by actual results that fail to deliver on exaggerated promises and idealistic visions. Social software, a tremendous global success story, has prompted similarly high expectations regarding the ways in which organizations can improve their knowledge handling. But can these expectations be met, whether in academic research or the real world? This article seeks to identify current research trends and gaps, with a focus on social knowledge environments. The proposed research agenda features four focal challenges: semi-permeable organizations, social software in professional work settings, crowd knowledge, and cross-border knowledge management. Three solutions emerge as likely methods to address these challenges: design-oriented solutions, analytical solutions, and interdisciplinary dialogue.

    Gathering Knowledge from Social Knowledge Management Environments: Validation of an Anticipatory Standard

    Knowledge management increasingly takes place in social environments supported by social software. This directly changes the way knowledge workers interact and the way information and communication technology is used. Recent studies, striving to provide more appropriate support for knowledge work, face challenges in eliciting knowledge from user activities and maintaining its situatedness in context. Corresponding solutions in such social environments are not interoperable due to a lack of appropriate standards. To bridge this gap, we propose and validate a first specification of an anticipatory standard in this field. We illustrate its application and utility by analyzing three scenarios. As the main result, we analyze the lessons learned and provide insights into the further research and development of our approach. We thereby seek to stimulate discussion and raise support for this initiative towards establishing standards in the domain of knowledge management.

    Manifesto for a Standard on Meaningful Representations of Knowledge in Social Knowledge Management Environments

    Knowledge Management (KM) is a social activity. More and more organizations use social software as a tool to bridge the gap between technology- and human-oriented KM. In order to create interoperable, transferable solutions, it is necessary to utilize standards. In this paper, we analyze which standards can be applied and which gaps currently exist. We present the concept of knowledge bundles, which capture information on knowledge objects, activities, and people as a prerequisite for socially focused KM. Based on our concept and examples, we derive the strong need for standardization in this domain. As a manifesto, this paper seeks to stimulate discussion and to enable a broad initiative working towards a common standard for the next generation of knowledge management systems. It provides eight recommendations for how the KM community should act to address future challenges.
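
    The knowledge bundle concept at the heart of the manifesto can be pictured as a small data structure. The Python sketch below is purely illustrative (the field names are our assumptions, not the authors' specification); it ties a knowledge object to the activities and people surrounding it, plus the context that keeps it situated:

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class KnowledgeBundle:
        """One possible shape of a knowledge bundle: a knowledge
        object plus the activities and people attached to it."""
        object_uri: str                                 # the knowledge object, e.g. a wiki page
        people: list = field(default_factory=list)      # contributors, commenters, ...
        activities: list = field(default_factory=list)  # e.g. "created", "tagged", "rated"
        context: dict = field(default_factory=dict)     # situational metadata
        captured: datetime = field(default_factory=datetime.now)

    bundle = KnowledgeBundle(
        object_uri="https://wiki.example.org/ProjectGlossary",
        people=["alice", "bob"],
        activities=["created", "commented"],
        context={"project": "KM-Standard", "tool": "wiki"},
    )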

    A Polychaete’s Powerful Punch: Venom Gland Transcriptomics of Glycera Reveals a Complex Cocktail of Toxin Homologs


    A SYNOPSIS OF ENTERPRISE CROWDSOURCING LITERATURE

    In the past few years, researchers have provided a desirable sense of clarity regarding the general term crowdsourcing and what it constitutes. With its emergence, however, several derivatives of the term have appeared in the scientific literature. This research article focuses on enterprise crowdsourcing as one of the recent derivatives, which, due to its ambiguity, requires further discussion and clarification. The article therefore aims to reveal the various nuances of how the term enterprise crowdsourcing is interpreted by different scholars. As the term has now gained reasonable momentum in the available crowdsourcing literature, it is time to reflect. In this work, a systematic literature review is applied to survey different explanations of the term and to derive its constitutive characteristics. Additionally, the article provides an overview of crowdsourcing applications deployed in an enterprise context for both primary and support activities of the value-added chain. Finally, the paper concludes with suggestions on how to prevent misinterpretation and which key questions should be addressed in future research.

    A LIGHTWEIGHT ONTOLOGY FOR ENTERPRISE CROWDSOURCING

    The paper introduces the prototype of a lightweight ontology for enterprise crowdsourcing. The proposed ontology aims to enhance automation and interoperability in enterprise crowdsourcing environments. The ontology engineering itself is based on a set of motivating scenarios and informal competency questions. The designed ontology currently contains 24 classes, 22 object properties, and 30 data properties to describe the main aspects of a crowdsourcing activity. In this paper, the development process of the prototype is briefly presented on all levels of abstraction: starting from the contextual and conceptual layer (motivating scenarios and informal competency questions), through the logical layer (data dictionary and schema), to the physical layer (implementation).
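
    To make the reported scale of the prototype (24 classes, 22 object properties, 30 data properties) more tangible, here is a minimal sketch of how one fragment of such an ontology could be expressed with Python's rdflib. The namespace IRI and the class and property names are hypothetical, not those of the actual prototype:

    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS, XSD

    # Hypothetical vocabulary IRI; the real ontology is published in the technical report.
    CS = Namespace("http://example.org/crowdsourcing#")

    g = Graph()
    g.bind("cs", CS)

    # Two classes, one object property, and one data property, mirroring the
    # three kinds of entities the abstract counts.
    g.add((CS.Task, RDF.type, OWL.Class))
    g.add((CS.Contributor, RDF.type, OWL.Class))
    g.add((CS.hasContributor, RDF.type, OWL.ObjectProperty))
    g.add((CS.hasContributor, RDFS.domain, CS.Task))
    g.add((CS.hasContributor, RDFS.range, CS.Contributor))
    g.add((CS.rewardAmount, RDF.type, OWL.DatatypeProperty))
    g.add((CS.rewardAmount, RDFS.domain, CS.Task))
    g.add((CS.rewardAmount, RDFS.range, XSD.decimal))

    print(g.serialize(format="turtle"))  # emits the fragment as Turtle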